
    Motivating Novice Crowd Workers through Goal Setting: An Investigation into the Effects on Complex Crowdsourcing Task Training

    Training workers within a task is one way of enabling novice workers, who may lack domain knowledge or experience, to take on complex crowdsourcing tasks. Drawing on goal-setting theory in psychology, we conduct a randomized experiment to study whether and how setting different goals (performance goals, learning goals, and behavioral goals) when training workers for a complex crowdsourcing task affects workers' learning perception, learning gain, and post-training performance. We find that setting different goals during training significantly affects workers' learning perception but, overall, has no effect on learning gain or post-training performance. However, higher learning gains can be obtained by setting learning goals for workers who are highly learning-oriented. Additionally, giving workers a challenging behavioral goal can nudge them to adopt desirable behavior meant to improve learning and performance, though this adoption does not yield as much improvement as when workers engage in the behavior of their own accord. We conclude by discussing lessons learned on how to effectively use goals in complex crowdsourcing task training.

    Characterizing Time Spent in Video Object Tracking Annotation Tasks: A Study of Task Complexity in Vehicle Tracking

    Video object tracking annotation is a form of complex data labeling that is inherently tedious and time-consuming. Prior studies of these tasks focus primarily on the quality of the resulting data, leaving much to be learned about how the data was generated and the factors that influenced its generation. In this paper, we take steps toward this goal by examining how human annotators spend their time in a video object tracking annotation task. We situate our study in a standard vehicle tracking task with bounding-box annotation. Within this setting, we study the role of task complexity by controlling two dimensions of task design, label constraint and label granularity, in conjunction with worker experience. Using telemetry and survey data collected from 40 full-time data annotators at a large technology corporation, we find that each dimension of task complexity uniquely affects how annotators spend their time, not only during the task but also before it begins. Furthermore, we find significant misalignment between observed and self-reported time use. We conclude by discussing the implications of our findings for video object tracking and the need to better understand how productivity can be defined in data annotation.